What is an AI server? Why artificial intelligence needs specialized systems

Always remember: Design AI infrastructure for scalability, so you can add more capability when you need it.
[Image: Comparison of different AI server models and configurations]
All the major players — Nvidia, Supermicro, Google, Asus, Dell, Intel, HPE — as well as smaller entrants are offering purpose-built AI hardware. Here’s a look at tools powering AI servers:
– Graphics processing units (GPUs): These specialized electronic circuits were originally designed to render real-time graphics for gaming, but their capabilities translate well to AI. Their strengths are high parallel processing power, scalability, security, fast execution and graphics rendering.
– Data processing units (DPUs): These systems on a chip (SoC) combine a CPU with a high-performance network interface and acceleration engines that can parse, process and transfer data at the speed of the rest of the network to improve AI performance.
– Application-specific integrated circuits (ASICs): These integrated circuits (ICs) are custom-designed for particular tasks. They come in two forms: gate arrays (semi-custom, which minimize upfront design work and cost) and full-custom designs (which offer more flexibility and can handle larger workloads).
– Tensor processing units (TPUs): Designed by Google, these cloud-based ASICs are suitable for a broad range of AI workloads, from training to fine-tuning to inference.
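To make the trade-offs above concrete, here is a minimal sketch of how a capacity-planning tool might map a workload to one of these accelerator classes. The function name, workload labels, and decision thresholds are hypothetical, chosen only to illustrate the roles the article describes: GPUs/TPUs for parallel training math, ASICs for fixed high-volume tasks, DPUs for data movement.

```python
# Hypothetical helper mapping a workload type to an accelerator class.
# The categories mirror the hardware described above; the logic is
# illustrative, not a real sizing tool.

def pick_accelerator(workload: str, custom_silicon_budget: bool = False) -> str:
    """Suggest an accelerator class for a given AI workload type."""
    if workload in ("training", "fine-tuning"):
        # Dense matrix math benefits from massively parallel hardware;
        # a purpose-built ASIC such as a TPU pays off at sufficient scale.
        return "TPU/ASIC" if custom_silicon_budget else "GPU"
    if workload == "inference":
        # Steady, fixed-shape serving can justify custom silicon.
        return "ASIC" if custom_silicon_budget else "GPU"
    if workload == "data-movement":
        # Offload parsing, processing and transfer from the CPU
        # to the network path, as a DPU does.
        return "DPU"
    # Anything else stays on the general-purpose CPU.
    return "CPU"


if __name__ == "__main__":
    print(pick_accelerator("training"))            # general-purpose GPU
    print(pick_accelerator("inference", True))     # custom silicon
    print(pick_accelerator("data-movement"))       # network offload
```

The point of the sketch is the division of labor, not the thresholds: in practice the choice also depends on model size, latency targets, and what your cloud or data center actually stocks.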